Adaptation Accelerating Sampling-based Bayesian Inference in Attractor Neural Networks
The brain performs probabilistic Bayesian inference to interpret the external world. The sampling-based view assumes that the brain represents the stimulus posterior distribution via samples of stochastic neuronal responses. Although the idea of sampling-based inference is appealing, it faces a critical challenge: is stochastic sampling fast enough to match the brain's rapid computation? In this study, we explore how latent stimulus sampling can be accelerated in neural circuits. Specifically, we consider a canonical neural circuit model, the continuous attractor neural network (CANN), and investigate how sampling-based inference of latent continuous variables is accelerated in CANNs.
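The core idea above, representing a posterior over a latent continuous stimulus through stochastic samples, can be sketched with a generic Langevin sampler targeting a Gaussian posterior. This is an illustrative stand-in, not the paper's CANN model; the posterior parameters `mu` and `sigma2`, the step size, and the sampler itself are assumptions made for the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian stimulus posterior p(s|x) with mean mu and variance sigma2,
# standing in for the posterior a neural population would represent
# (assumed for illustration; not the paper's network model).
mu, sigma2 = 0.5, 0.04

def langevin_samples(n_samples, dt=0.002, s0=0.0):
    """Overdamped Langevin sampling of p(s|x):
    s <- s + dt * d/ds log p(s|x) + sqrt(2*dt) * Gaussian noise.
    The chain's stationary distribution approximates p(s|x)."""
    s = s0
    out = np.empty(n_samples)
    for i in range(n_samples):
        grad_logp = -(s - mu) / sigma2          # gradient of log-density
        s += dt * grad_logp + np.sqrt(2.0 * dt) * rng.standard_normal()
        out[i] = s
    return out

samples = langevin_samples(20_000)
```

After a burn-in period, the sample mean and variance approximate the posterior's; the slow mixing of such plain samplers is exactly the speed concern the abstract raises.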
Noisy Adaptation Generates Lévy Flights in Attractor Neural Networks
Lévy flights describe a special class of random walks whose step sizes follow a power-law-tailed distribution. As an efficient searching strategy in unknown environments, Lévy flights are widely observed in animal foraging behaviors. Recent studies further showed that human cognitive functions also exhibit the characteristics of Lévy flights. Despite being a general phenomenon, the circuit-level neural mechanism for generating Lévy flights remains unresolved. Here, we investigate how Lévy flights can be achieved in attractor neural networks.
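The random-walk class itself is easy to sketch: draw step lengths from a power-law (Pareto) distribution and directions uniformly at random. This is a generic illustration of a Lévy flight, not the paper's network mechanism; the exponent `alpha` and the 2D setting are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def levy_flight(n_steps, alpha=1.5, rng=rng):
    """Simulate a 2D Levy flight: power-law step lengths, uniform headings.

    Step lengths follow P(l > x) = x**(-alpha) for x >= 1; for alpha < 2
    the step-size variance diverges -- the signature of a Levy flight
    (occasional very long jumps amid many short steps).
    """
    lengths = rng.pareto(alpha, size=n_steps) + 1.0   # power-law tail
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n_steps)
    steps = np.column_stack([lengths * np.cos(angles),
                             lengths * np.sin(angles)])
    return np.cumsum(steps, axis=0)                   # walker trajectory

traj = levy_flight(10_000)
```

In a sample this size the largest step dwarfs the median step, unlike a Gaussian (Brownian) walk where step sizes stay comparable.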
Attractor Neural Networks with Local Inhibition: from Statistical Physics to a Digital Programmable Integrated Circuit
Networks with local inhibition are shown to have enhanced computational performance with respect to classical Hopfield-like networks. In particular, the critical capacity of the network is increased, as is its capability to store correlated patterns. An implementation based on a programmable logic device is presented here. A 16-neuron circuit is implemented with a Xilinx 4020 device. The peculiarity of this solution is the possibility to change parts of the project (weights, transfer function, or the whole architecture) with a simple software download of the configuration into the Xilinx chip.
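For reference, the classical Hopfield baseline against which the local-inhibition network is compared can be sketched in a few lines. This is a minimal Hebbian-storage example; the network size, pattern count, and recall routine are illustrative assumptions, and the local-inhibition extension itself is not shown:

```python
import numpy as np

rng = np.random.default_rng(2)

N, P = 100, 5                                   # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

# Hebbian weight matrix with zero diagonal (classical Hopfield rule)
W = patterns.T @ patterns / N
np.fill_diagonal(W, 0.0)

def recall(cue, n_iter=20):
    """Synchronous sign updates; at low load (P/N well below the
    classical capacity ~0.138N) this settles onto a stored pattern."""
    s = cue.astype(float).copy()
    for _ in range(n_iter):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                          # break ties toward +1
    return s

# Corrupt 10% of pattern 0's bits, then let the attractor dynamics clean it up
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
out = recall(cue)
```

The critical capacity and correlated-pattern storage that the abstract claims to improve are properties of exactly this kind of weight matrix and update rule.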
Optimal Signalling in Attractor Neural Networks
In [Meilijson and Ruppin, 1993] we presented a methodological framework describing the two-iteration performance of Hopfield-like attractor neural networks with history-dependent, Bayesian dynamics. We now extend this analysis in a number of directions: input patterns applied to small subsets of neurons, general connectivity architectures, and more efficient use of history. We show that the optimal signal (activation) function has a slanted sigmoidal shape, and provide an intuitive account of activation functions with a non-monotone shape.
Decisional Processes with Boolean Neural Network: the Emergence of Mental Schemes
Barnabei, Graziano, Bagnoli, Franco, Conversano, Ciro, Lensi, Elena
Human decisional processes result from the employment of selected quantities of relevant information, generally synthesized from environmental incoming data and stored memories. Their main goal is the production of an appropriate and adaptive response to a cognitive or behavioral task. Different strategies of response production can be adopted, among which are haphazard trials, formation of mental schemes, and heuristics. In this paper, we propose a model of a Boolean neural network that incorporates these strategies by resorting to global optimization strategies during the learning session. The model also characterizes the transition from an unstructured/chaotic attractor neural network, typical of data-driven processes, to a faster, forward-only network representative of schema-driven processes. Moreover, a simplified version of the Iowa Gambling Task (IGT) is introduced in order to test the model. Our results match experimental data and highlight relevant findings from the psychological domain.
A Neural Model of Delusions and Hallucinations in Schizophrenia
Ruppin, Eytan, Reggia, James A., Horn, David
We implement and study a computational model of Stevens' [1992] theory of the pathogenesis of schizophrenia. This theory hypothesizes that the onset of schizophrenia is associated with reactive synaptic regeneration occurring in brain regions receiving degenerating temporal lobe projections. Concentrating on one such area, the frontal cortex, we model a frontal module as an associative memory neural network whose input synapses represent incoming temporal projections. We analyze how, in the face of weakened external input projections, compensatory strengthening of internal synaptic connections and increased noise levels can maintain memory capacities (which are generally preserved in schizophrenia). However, these compensatory changes also lead to spontaneous, biased retrieval of stored memories, which corresponds to the occurrence of schizophrenic delusions and hallucinations without any apparent external trigger, and accounts for their tendency to concentrate on just a few central themes. Our results explain why these symptoms tend to wane as schizophrenia progresses, and why delayed therapeutic intervention leads to a much slower response.
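The qualitative mechanism described above, weakened external projections compensated by strengthened internal synapses plus noise, can be sketched with a generic Hopfield-style associative memory. This is an illustrative toy, not the authors' model; the field strengths, gain, and noise level are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

N, P = 200, 3                                   # neurons, stored memories
memories = rng.choice([-1, 1], size=(P, N))

# Hebbian internal synapses of the associative-memory module
W = memories.T @ memories / N
np.fill_diagonal(W, 0.0)

def settle(external, internal_gain, noise_level, n_iter=50):
    """Synchronous dynamics from a random state, driven by a (possibly
    weak) external field, compensated internal gain, and additive noise."""
    s = rng.choice([-1, 1], size=N).astype(float)
    for _ in range(n_iter):
        h = (internal_gain * (W @ s) + external
             + noise_level * rng.standard_normal(N))
        s = np.where(h >= 0, 1.0, -1.0)
    return s

# A weakened external projection toward memory 0, compensated by a
# strengthened internal gain: retrieval of the memory is still complete.
s = settle(external=0.5 * memories[0], internal_gain=2.0, noise_level=0.2)
```

With the internal gain turned up, even a faint external cue (or, at higher noise, no cue at all) is enough to pull the network into a full stored memory, mirroring the capacity preservation and internally triggered retrieval the abstract describes.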